Visual place recognition (VPR) is usually regarded as a specific image retrieval problem. Limited by existing training frameworks, most deep learning-based works cannot extract sufficiently stable global features from RGB images and rely on a time-consuming re-ranking step to exploit spatial structural information for better performance. In this paper, we propose StructVPR, a novel training architecture for VPR, to enhance structural knowledge in RGB global features and thus improve feature stability in constantly changing environments. Specifically, StructVPR uses segmentation images as a more definitive source of structural knowledge to feed a CNN, and applies knowledge distillation so that online segmentation and inference of the seg-branch are avoided at test time. Considering that not all samples contain high-quality and helpful knowledge, and some even hurt the performance of distillation, we partition samples and weight each sample's distillation loss to enhance the expected knowledge precisely. Finally, StructVPR achieves impressive performance on several benchmarks using only global retrieval and even outperforms many two-stage approaches by a large margin. After adding re-ranking, our method achieves state-of-the-art performance while maintaining a low computational cost.
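To make the group-and-weight idea concrete, here is a minimal PyTorch sketch of a per-sample weighted feature-distillation loss; the partitioning rule that produces the weights and StructVPR's exact distillation objective are not reproduced here, so the weight values and the plain L2 term are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def weighted_distillation_loss(student_feats, teacher_feats, sample_weights):
    """Per-sample weighted feature distillation (illustrative sketch).

    student_feats, teacher_feats: (B, D) global descriptors from the RGB
    branch and the segmentation branch; sample_weights: (B,) weights from
    sample partitioning, where samples judged to carry little or harmful
    structural knowledge receive small (or zero) weight.
    """
    per_sample = F.mse_loss(student_feats, teacher_feats, reduction="none").mean(dim=1)
    return (sample_weights * per_sample).mean()
```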
Visual relocalization has been widely studied in 3D vision: given a pre-built 3D visual map, estimate the 6-DoF (degrees of freedom) pose of a query image. Relocalization in large-scale indoor environments enables attractive applications such as augmented reality and robot navigation. However, appearance changes rapidly as the camera moves in such environments, which is challenging for relocalization systems. To address this problem, we propose a virtual-view-synthesis-based approach, RenderNet, to enrich the database and refine poses for this particular scenario. Instead of rendering real images, which requires high-quality 3D models, we choose to directly render the necessary global and local features of virtual viewpoints and apply them in the subsequent image retrieval and feature matching operations, respectively. The proposed method can largely improve performance in large-scale indoor environments, e.g., achieving improvements of 7.1% and 12.2% on the InLoc dataset.
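A small sketch of the database-enrichment step described above, with `render_feature` standing in for a feature-rendering network that maps a 6-DoF pose to a global descriptor (an assumed interface, not the paper's implementation):

```python
import numpy as np

def enrich_database(db_feats, db_poses, virtual_poses, render_feature):
    """Add descriptors rendered at virtual viewpoints to the retrieval database."""
    virtual_feats = np.stack([render_feature(p) for p in virtual_poses])
    feats = np.concatenate([db_feats, virtual_feats], axis=0)
    poses = np.concatenate([db_poses, virtual_poses], axis=0)
    return feats, poses

def retrieve(query_feat, db_feats, top_k=5):
    """Cosine-similarity retrieval over the (enriched) database."""
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    q = query_feat / np.linalg.norm(query_feat)
    return np.argsort(db @ q)[::-1][:top_k]
```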
Transformers have achieved success in many vision tasks, thanks to their ability to capture long-range dependencies. However, their quadratic computational complexity poses a major obstacle to applying them to vision tasks that require dense prediction, such as object detection, feature matching, and stereo matching. We introduce QuadTree Attention, which reduces the computational complexity from quadratic to linear. Our quadtree transformer builds token pyramids and computes attention in a coarse-to-fine manner. At each level, the top K patches with the highest attention scores are selected, so that at the next level attention is only evaluated within the relevant regions corresponding to these top K patches. We show that quadtree attention achieves state-of-the-art performance in various vision tasks, e.g., a 4.0% improvement in feature matching on ScanNet, about 50% fewer FLOPs in stereo matching, a 0.4-1.5% improvement in ImageNet classification, a 1.2-1.8% improvement in COCO object detection, and a 0.7-2.4% improvement in semantic segmentation over previous state-of-the-art transformers. The code is available at https://github.com/tangshitao/quadtreeattention.
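The coarse-to-fine top-K idea can be illustrated with a simplified two-level, single-head PyTorch sketch; the real QuadTree Attention builds a multi-level token pyramid and a proper quadtree, so the single window pooling and the shapes below are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def two_level_topk_attention(q, k, v, topk=4, window=4):
    """Two-level coarse-to-fine attention with top-K selection (sketch).

    q, k, v: (B, N, D) tokens on a sqrt(N) x sqrt(N) grid (N a perfect square
    divisible by window**2).  Coarse level: keys are average-pooled into
    non-overlapping windows and attended densely; only the top-K windows per
    query are kept.  Fine level: attention is evaluated only over the tokens
    inside the selected windows, so the cost grows linearly in N.
    """
    B, N, D = k.shape
    side = int(N ** 0.5)
    w, g = window, int(N ** 0.5) // window            # g: windows per side
    dev = q.device

    # coarse level: pool keys over non-overlapping windows and score them
    k_img = k.transpose(1, 2).reshape(B, D, side, side)
    k_coarse = F.avg_pool2d(k_img, w).flatten(2).transpose(1, 2)     # (B, g*g, D)
    coarse_scores = (q @ k_coarse.transpose(1, 2)) / D ** 0.5        # (B, Nq, g*g)
    _, top_idx = coarse_scores.topk(topk, dim=-1)                    # (B, Nq, K)

    # map each window index to the flat indices of its w*w member tokens
    win_r = torch.arange(g, device=dev).repeat_interleave(g)
    win_c = torch.arange(g, device=dev).repeat(g)
    in_r = torch.arange(w, device=dev).repeat_interleave(w)
    in_c = torch.arange(w, device=dev).repeat(w)
    token_idx = ((win_r[:, None] * w + in_r[None, :]) * side
                 + (win_c[:, None] * w + in_c[None, :]))             # (g*g, w*w)

    # fine level: attend only to tokens of the selected windows
    sel = token_idx[top_idx].reshape(B, q.shape[1], -1)              # (B, Nq, K*w*w)
    batch = torch.arange(B, device=dev)[:, None, None]
    k_sel, v_sel = k[batch, sel], v[batch, sel]                      # (B, Nq, K*w*w, D)

    fine_scores = (q.unsqueeze(2) @ k_sel.transpose(2, 3)).squeeze(2) / D ** 0.5
    attn = fine_scores.softmax(dim=-1)
    return (attn.unsqueeze(2) @ v_sel).squeeze(2)                    # (B, Nq, D)
```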
Although existing face anti-spoofing (FAS) methods achieve high accuracy in intra-domain experiments, their effectiveness degrades severely in cross-domain scenarios because of poor generalizability. Recently, a variety of techniques, such as domain generalization and representation disentanglement, have been explored. However, the improvement is still limited by two issues: 1) it is difficult to map all faces into a shared feature space; if faces from unknown domains are not mapped into known regions of the shared feature space, inaccurate predictions are obtained unexpectedly; 2) it is difficult to fully cover the various spoof traces during disentanglement. In this paper, we propose a feature generation and hypothesis verification framework to alleviate the two issues. Notably, a feature generation network that generates hypotheses of real faces and known attacks is introduced into the FAS task for the first time. Subsequently, two hypothesis verification modules are applied to judge whether the input face comes from the real-face space and the real-face distribution, respectively. In addition, analyses of the relationship between our framework and Bayesian uncertainty estimation are given, providing theoretical support for reliable defense in unknown domains. Experimental results show that our framework achieves promising performance and outperforms state-of-the-art methods on public datasets.
Data-free knowledge distillation (DFKD) has recently been attracting increasing attention from the research community, attributed to its capability of compressing a model with only synthetic data. Despite the encouraging results, state-of-the-art DFKD methods still suffer from inefficient data synthesis, making the data-free training process extremely time-consuming and thus inapplicable to large-scale tasks. In this work, we introduce an efficient scheme, termed FastDFKD, that allows us to accelerate DFKD by an order of magnitude. At the heart of our approach is a novel strategy that reuses the shared common features in the training data so as to synthesize different data instances. Unlike prior methods that optimize a set of data independently, we propose to learn a meta-synthesizer that seeks common features as the initialization for fast data synthesis. As a result, FastDFKD achieves data synthesis within only a few steps, significantly improving the efficiency of data-free training. Experiments on CIFAR, NYUv2, and ImageNet show that the proposed FastDFKD achieves 10x and even 100x acceleration while preserving performance on par with the state of the art.
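A heavily simplified sketch of the "synthesize from a learned common initialization" idea: `meta_init` is assumed to hold the learned common features, the inversion objective below is a stand-in (confidence maximization) rather than the paper's loss, and the Reptile-style outer update is likewise an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def fast_synthesize(meta_init, teacher, steps=5, lr=0.1):
    """Synthesize a batch of data in a few steps from a common initialization."""
    x = meta_init.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = teacher(x)
        # stand-in inversion objective: push the teacher toward confident predictions
        loss = F.cross_entropy(logits, logits.argmax(dim=1))
        loss.backward()
        opt.step()
    return x.detach()

def meta_update(meta_init, synthesized, beta=0.1):
    """Reptile-style consolidation of common features across synthesis rounds."""
    return meta_init + beta * (synthesized - meta_init)
```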
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose MGTAB, a Multi-Relational Graph-Based Twitter Account Detection Benchmark and the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original data collection in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. For user features, we extracted the 20 user property features with the greatest information gain together with user tweet features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments show that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and point out potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
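The feature-selection step (keeping the 20 property features with the greatest information gain) follows a standard recipe; a sketch with placeholder data is shown below, using mutual information from scikit-learn as the information-gain estimate, which is an assumption rather than the benchmark's exact procedure.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

# X: (n_users, n_property_features) candidate user property features
# y: (n_users,) expert annotations (e.g. bot / human)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))          # placeholder data for illustration
y = rng.integers(0, 2, size=1000)

gain = mutual_info_classif(X, y, random_state=0)   # information-gain proxy
top20 = np.argsort(gain)[::-1][:20]                 # indices of the 20 best features
X_selected = X[:, top20]
```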
As one of the prevalent ways to build automation systems, Imitation Learning (IL) delivers promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose R2RISE, a model-agnostic explanation framework for IL models. R2RISE aims to explain the overall policy performance with respect to the frames in the demonstrations. It iteratively retrains the black-box IL model from randomly masked demonstrations and uses the conventional evaluation outcome, the environment return, as the coefficient to build an importance map. We also conducted experiments to investigate three major questions: whether frames are equally important, how effective the importance map is, and how importance maps from different IL models are connected. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
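The importance-map construction is RISE-like and easy to sketch; `evaluate_policy` below is an assumed callable that retrains the black-box IL model on the frames kept by a mask and returns the resulting environment return (the costly step in R2RISE), and the frame-level mask granularity and aggregation are simplifications.

```python
import numpy as np

def r2rise_importance(num_frames, evaluate_policy, num_masks=100, p_keep=0.5, seed=0):
    """Estimate per-frame importance of demonstration frames (sketch)."""
    rng = np.random.default_rng(seed)
    importance = np.zeros(num_frames)
    total_weight = np.zeros(num_frames)
    for _ in range(num_masks):
        mask = (rng.random(num_frames) < p_keep).astype(float)
        ret = evaluate_policy(mask)          # environment return as the coefficient
        importance += ret * mask
        total_weight += mask
    return importance / np.maximum(total_weight, 1e-8)
```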
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical for improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e., blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e., flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and higher consistency with human visual perception. For temporal artifacts, the self-attention-based TimeSFormer is improved to detect them. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed metric outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes a weighted average of mean and percentile performance, and it covers distributionally robust MDPs and distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By assuming that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive a tractable reformulation of our model. In particular, we show that the return-risk model can also account for risk from an uncertain transition kernel when only deterministic policies are sought, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.
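For intuition, one way to write a weighted mean-percentile objective over a Wasserstein ambiguity set is sketched below in LaTeX; the notation (weight lambda, percentile level alpha, VaR as the percentile criterion, a ball of radius epsilon around the nominal distribution) is assumed for illustration and need not match the paper's exact formulation.

```latex
% Illustrative return-risk objective (assumed notation): lambda trades off
% mean performance against a percentile (VaR) criterion at level alpha, with
% the unknown reward distribution P ranging over a Wasserstein ball of radius
% epsilon around the nominal estimate \hat{P}.
\max_{\pi} \; \inf_{P \in \mathcal{B}_{\varepsilon}(\hat{P})}
  \left[ \lambda \, \mathbb{E}_{P}\!\left[ R^{\pi} \right]
       + (1 - \lambda) \, \operatorname{VaR}^{P}_{\alpha}\!\left( R^{\pi} \right) \right],
\qquad \lambda \in [0, 1].
```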